Inhomogeneous Markov Chain Approach To Probabilistic Swarm Guidance Algorithms
Abstract
Small satellites are well suited for formation flying missions, in which multiple satellites operate together in a cluster or predefined geometry to accomplish the task of a single conventional large satellite. In comparison with traditional large satellites, small satellites are modular in nature and offer low development cost, since commercial off-the-shelf components enable rapid manufacturing. The flight of swarms of hundreds to thousands of femtosatellites (100-gram-class satellites) for synthetic-aperture applications is discussed in [1]. A probabilistic guidance approach, which provides a method for each agent to determine its own trajectory such that the overall swarm converges to a desired distribution while requiring no communication between agents, is discussed in [2]. Instead of allocating agent positions ahead of time, probabilistic guidance is based on designing a homogeneous Markov chain whose steady-state distribution corresponds to the desired swarm density. Although each agent propagates its position in a statistically independent manner, the swarm asymptotically converges to the desired steady-state distribution associated with the homogeneous Markov chain and also automatically repairs any damage. A similar study of swarm self-organization using homogeneous Markov chains was carried out in [3]. The desired Markov matrices, which guide individual swarm agents in a completely decentralized fashion, are synthesized using the Metropolis-Hastings algorithm [4]. Probabilistic guidance using homogeneous Markov chains has two limitations: (i) the agents are never allowed to settle, even after the swarm has reached the desired steady-state distribution, which results in significant fuel loss; and (ii) only asymptotic guarantees of convergence are provided. This paper develops probabilistic swarm guidance algorithms using inhomogeneous Markov chains to address these limitations.
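The Metropolis-Hastings construction mentioned above can be illustrated with a short sketch. The function below is an assumption about the standard construction (uniform proposal over neighboring bins), not the paper's exact synthesis: given a desired stationary distribution `pi` over bins and an adjacency matrix `A` encoding allowed agent transitions, it returns a row-stochastic Markov matrix whose steady-state distribution is `pi`.

```python
import numpy as np

def metropolis_hastings_matrix(pi, A):
    """Sketch of a Metropolis-Hastings Markov matrix synthesis.

    pi : desired stationary distribution over n bins (all entries > 0).
    A  : n-by-n 0/1 adjacency matrix; A[i, j] = 1 if an agent in bin i
         may transition to bin j (assumed symmetric).

    Uses a uniform proposal over neighbors, accepted with the standard
    Metropolis-Hastings ratio, so the chain is reversible w.r.t. pi.
    """
    pi = np.asarray(pi, dtype=float)
    n = len(pi)
    deg = A.sum(axis=1)              # number of neighbors of each bin
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j]:
                # propose j with prob 1/deg[i]; accept with MH ratio
                M[i, j] = (1.0 / deg[i]) * min(
                    1.0, (pi[j] * deg[i]) / (pi[i] * deg[j]))
        M[i, i] = 1.0 - M[i].sum()   # remaining probability: stay put
    return M
```

Because the resulting chain satisfies detailed balance (pi[i]*M[i,j] = pi[j]*M[i,i]'s counterpart), agents that independently sample their next bin from their current row of `M` form a swarm whose density converges to `pi`.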
In order to achieve these objectives, each agent must sense the agents in its surroundings and communicate with its neighboring agents. The motion of agents in the swarm can be viewed as analogous to the random motion of molecules in an ideal gas. Just as temperature dictates the motion of molecules in a gas, this paper uses the Kullback–Leibler (KL) divergence between the current swarm distribution and the desired steady-state distribution to dictate the motion of agents in the swarm. The KL divergence is a non-symmetric measure of the difference between two probability distributions [5]. Each agent senses the agents in its surroundings and makes a localized estimate of the current swarm distribution. Using the Bayesian Consensus Filtering algorithm, the agents reach a consensus on the global current swarm distribution [6]. The next step involves synthesizing a sequence of inhomogeneous Markov matrices such that, as the KL divergence decreases, the Markov matrices tend towards the identity matrix. The conditions for the existence of such a sequence of inhomogeneous Markov matrices, called the 'hardening-position scheme', are discussed in [7]. In essence, when the KL divergence between the current swarm distribution and the desired steady-state distribution is large, each agent propagates its position in a statistically independent manner, and the swarm tends toward the desired steady-state distribution. When this KL divergence is small, the Markov matrix tends towards the identity matrix and each agent holds its own position.

Figure 1: Fuel usage for different probabilistic swarm guidance
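The hardening-position idea can be sketched numerically. The blend below is a hypothetical illustration (the exact scheme is given in [7]): the transition matrix is pulled toward the identity as the KL divergence between the current and desired distributions shrinks, so agents hold position once the swarm has converged. The `gain` parameter and exponential weighting are assumptions made for this sketch.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Non-symmetric KL divergence D(p || q) of discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def hardened_matrix(M, current, desired, gain=1.0):
    """Illustrative 'hardening-position' blend (an assumption, not the
    paper's exact scheme): when the KL divergence is large the agents
    follow M; as it approaches zero the matrix tends to the identity."""
    d = kl_divergence(current, desired)
    alpha = 1.0 - np.exp(-gain * d)   # alpha -> 0 as divergence -> 0
    n = M.shape[0]
    return alpha * M + (1.0 - alpha) * np.eye(n)
```

Note that the convex combination `alpha*M + (1-alpha)*I` preserves both row-stochasticity and the stationary distribution of `M`, so hardening the agents' positions does not disturb the desired steady-state density.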
Similar papers
IWSCFF-2015-xx-xx Feedback-Based Inhomogeneous Markov Chain Approach to Probabilistic Swarm Guidance
This paper presents a novel and generic distributed swarm guidance algorithm using inhomogeneous Markov chains that guarantees superior performance over existing homogeneous Markov chain based algorithms, when the feedback of the current swarm distribution is available. The probabilistic swarm guidance using inhomogeneous Markov chain (PSG–IMC) algorithm guarantees sharper and faster convergenc...
Probabilistic Guidance of Swarms using Sequential Convex Programming
In this paper, we integrate, implement, and validate formation flying algorithms for a large number of agents using probabilistic swarm guidance with inhomogeneous Markov chains and model predictive control with sequential convex programming. Using an inhomogeneous Markov chain, each agent determines its target position during each time step in a statistically independent manner while the swarm c...
Efficient Probabilistic Parameter Synthesis for Adaptive Systems
Probabilistic modelling has proved useful to analyse performance, reliability and energy usage of distributed or networked systems. We consider parametric probabilistic models, in which probabilities are specified as expressions over a set of parameters, rather than concrete values. We address the parameter synthesis problem for parametric Markov decision processes and paramet...
Decentralized probabilistic density control of autonomous swarms with safety constraints
This paper presents a Markov chain based approach for the probabilistic density control of a large number (swarm) of autonomous agents. The proposed approach specifies the time evolution of the probabilistic density distribution by using a Markov chain, which guides the swarm to a desired steady-state distribution while satisfying the prescribed ergodicity, motion, and safety constraints. This...
Evaluation of First and Second Markov Chains Sensitivity and Specificity as Statistical Approach for Prediction of Sequences of Genes in Virus Double Strand DNA Genomes
The growing amount of information on biological sequences has made the application of statistical approaches necessary for modeling and estimating their functions. In this paper, the sensitivity and specificity of first- and second-order Markov chains for gene prediction were evaluated using complete double-stranded DNA virus genomes. There were two approaches for prediction of each Markov model parameter,...